

Search for: All records
Creators/Authors contains: "Tu, Cheng"


  1. Intermediate features of a pre-trained model have been shown to be informative for making accurate predictions on downstream tasks, even if the model backbone is kept frozen. The key challenge is how to utilize these intermediate features given their gigantic amount. We propose visual query tuning (VQT), a simple yet effective approach to aggregate intermediate features of Vision Transformers. By introducing a handful of learnable “query” tokens to each layer, VQT leverages the inner workings of Transformers to “summarize” the rich intermediate features of each layer, which can then be used to train the prediction heads of downstream tasks. As VQT keeps the intermediate features intact and only learns to combine them, it enjoys memory efficiency in training compared to many other parameter-efficient fine-tuning approaches that learn to adapt features and need back-propagation through the entire backbone. This also suggests a complementary role between VQT and those approaches in transfer learning. Empirically, VQT consistently surpasses the state-of-the-art approach that utilizes intermediate features for transfer learning and outperforms full fine-tuning in many cases. Compared to parameter-efficient approaches that adapt features, VQT achieves much higher accuracy under memory constraints. Most importantly, VQT is compatible with these approaches to attain even higher accuracy, making it a simple add-on to further boost transfer learning. Code is available at https://github.com/andytu28/VQT.
    Free, publicly-accessible full text available July 1, 2024
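
To make the mechanism described above concrete, here is a minimal, hedged PyTorch sketch of per-layer learnable query tokens that pool frozen intermediate features and feed a linear head. It is not the authors' implementation (see the linked repository), and the class, argument, and variable names below are hypothetical.

```python
import torch
import torch.nn as nn

class VQTSummarizer(nn.Module):
    """Learnable per-layer query tokens that pool frozen intermediate ViT features."""

    def __init__(self, num_layers, embed_dim, num_queries=1, num_classes=10):
        super().__init__()
        # One small set of query tokens per Transformer layer.
        self.queries = nn.Parameter(torch.zeros(num_layers, num_queries, embed_dim))
        nn.init.trunc_normal_(self.queries, std=0.02)
        # Linear head trained on the concatenated per-layer summaries.
        self.head = nn.Linear(num_layers * num_queries * embed_dim, num_classes)

    def forward(self, layer_features):
        # layer_features: list of [B, N, D] tensors, one per frozen ViT block,
        # detached so no gradients flow back through the backbone.
        summaries = []
        for l, feats in enumerate(layer_features):
            q = self.queries[l].unsqueeze(0).expand(feats.size(0), -1, -1)   # [B, Q, D]
            attn = torch.softmax(q @ feats.transpose(1, 2) / feats.size(-1) ** 0.5, dim=-1)
            summaries.append((attn @ feats).flatten(1))                      # [B, Q*D]
        return self.head(torch.cat(summaries, dim=-1))
```

In this sketch only the query tokens and the head receive gradients, which is the source of the memory savings the abstract contrasts with approaches that back-propagate through the entire backbone.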
  2. Fractals are geometric shapes that can display complex and self-similar patterns found in nature (e.g., clouds and plants). Recent works in visual recognition have leveraged this property to create random fractal images for model pre-training. In this paper, we study the inverse problem: given a target image (not necessarily a fractal), we aim to generate a fractal image that looks like it. We propose a novel approach that learns the parameters underlying a fractal image via gradient descent. We show that our approach can find fractal parameters of high visual quality and is compatible with different loss functions, opening up several potential applications, e.g., learning fractals for downstream tasks, scientific understanding, etc.
    Free, publicly-accessible full text available June 27, 2024
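
To illustrate what fitting fractal parameters by gradient descent can look like, the sketch below optimizes the affine maps of an iterated function system against a target image through a differentiable point-splatting renderer. It is a simplified illustration under assumed choices (chaos-game sampling, Gaussian splatting, MSE loss), not the paper's actual method; every name in it is hypothetical.

```python
import torch
import torch.nn.functional as F

def render_soft(points, size=32, sigma=1.0):
    """Differentiably splat 2D points in [0, 1]^2 onto a size x size grid."""
    xs = torch.linspace(0.0, 1.0, size)
    grid = torch.stack(torch.meshgrid(xs, xs, indexing="ij"), dim=-1)        # [size, size, 2]
    d2 = ((grid.view(-1, 1, 2) - points.view(1, -1, 2)) ** 2).sum(-1)        # [size*size, P]
    return torch.exp(-d2 / (2.0 * (sigma / size) ** 2)).sum(-1).view(size, size)

def sample_ifs(A, b, n_points=300):
    """Chaos game: repeatedly apply a randomly chosen affine map x <- A_k x + b_k."""
    x = torch.rand(2)
    pts = []
    for _ in range(n_points):
        k = torch.randint(A.size(0), (1,)).item()
        x = torch.sigmoid(A[k] @ x + b[k])       # squash to keep points in [0, 1]^2
        pts.append(x)
    return torch.stack(pts)

target = torch.rand(32, 32)                      # stand-in for the target image
A = (0.1 * torch.randn(4, 2, 2)).requires_grad_()    # affine matrices of 4 maps
b = (0.1 * torch.randn(4, 2)).requires_grad_()       # translations of 4 maps
opt = torch.optim.Adam([A, b], lr=1e-2)

for step in range(200):
    opt.zero_grad()
    loss = F.mse_loss(render_soft(sample_ifs(A, b)), target)
    loss.backward()
    opt.step()
```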
  3. Free, publicly-accessible full text available June 1, 2024
  4.
  5. Missing data are ubiquitous in many domains such as healthcare. When these data entries are not missing completely at random, the (conditional) independence relations in the observed data may be different from those in the complete data generated by the underlying causal process. Consequently, simply applying existing causal discovery methods to the observed data may lead to wrong conclusions. In this paper, we aim at developing a causal discovery method to recover the underlying causal structure from observed data that are missing under different mechanisms, including missing completely at random (MCAR), missing at random (MAR), and missing not at random (MNAR). With missingness mechanisms represented by missingness graphs (m-graphs), we analyze conditions under which additional correction is needed to derive conditional independence/dependence relations in the complete data. Based on our analysis, we propose Missing Value PC (MVPC), which extends the PC algorithm to incorporate additional corrections. Our proposed MVPC is shown in theory to give asymptotically correct results even on data that are MAR or MNAR. Experimental results on both synthetic data and real healthcare applications illustrate that the proposed algorithm is able to find correct causal relations even in the general case of MNAR.
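
For context on the primitive that constraint-based methods such as PC apply at every step, the sketch below shows a test-wise-deletion conditional-independence test (a Fisher-z partial-correlation test on complete cases). MVPC's corrections for MAR and MNAR data sit on top of such a test and are not reproduced here; this is an illustrative sketch only, and the function and variable names are hypothetical.

```python
import numpy as np
import pandas as pd
from scipy import stats

def ci_test_testwise_deletion(df, x, y, cond, alpha=0.05):
    """Fisher-z partial-correlation CI test on complete cases of (x, y, cond)."""
    cols = [x, y] + list(cond)
    data = df[cols].dropna().to_numpy()          # test-wise deletion of incomplete rows
    n = data.shape[0]
    corr = np.corrcoef(data, rowvar=False)
    prec = np.linalg.pinv(corr)                  # precision matrix
    r = -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])   # partial correlation of x, y | cond
    z = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(n - len(cond) - 3)
    p = 2 * (1 - stats.norm.cdf(abs(z)))
    return p > alpha                             # True => independent at level alpha

# Example: x ⟂ y | z should hold in a chain x -> z -> y even with missing entries.
rng = np.random.default_rng(0)
x = rng.normal(size=2000); z = x + rng.normal(size=2000); y = z + rng.normal(size=2000)
df = pd.DataFrame({"x": x, "y": y, "z": z})
df.loc[rng.random(2000) < 0.1, "y"] = np.nan     # MCAR missingness for illustration
print(ci_test_testwise_deletion(df, "x", "y", ["z"]))
```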
  6. Abstract

    Miniaturized piezoelectric/magnetostrictive contour‐mode resonators are effective magnetometers by exploiting the ΔE effect. With dimensions of ≈100–200 µm across and <1 µm thick, they offer high spatial resolution, portability, low power consumption, and low cost. However, a thorough understanding of the magnetic material behavior in these devices is lacking, hindering performance optimization. This manuscript reports on the strong, nonlinear correlation observed between the frequency response of these sensors and the stress‐induced curvature of the resonator plate. The resonance frequency shift caused by DC magnetic fields drops off rapidly with increasing curvature: about two orders of magnitude separate the highest and lowest frequency shift in otherwise identical devices. Similarly, an inverse correlation with the quality factor is found, suggesting a magnetic loss mechanism. The mechanical and magnetic properties are theoretically analyzed using magnetoelastic finite‐element and magnetic domain‐phase models. The resulting model fits the measurements well and is generally consistent with additional results from magneto‐optical domain imaging. Thus, the origin of the observed behavior is identified and broader implications for the design of nanomagnetoelastic devices are derived. By fabricating a magnetoelectric nanoplate resonator with low curvature, a record‐high DC magnetic field sensitivity of 5 Hz nT⁻¹ is achieved.

     
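
As a quick numerical illustration of what the reported 5 Hz nT⁻¹ sensitivity implies, the snippet below converts an assumed frequency-resolution figure into a minimum detectable DC field; the 0.1 Hz noise floor used here is a hypothetical number, not one taken from the paper.

```python
# Back-of-the-envelope illustration (not from the paper).
sensitivity_hz_per_nt = 5.0      # reported frequency shift per nT of DC field
freq_noise_hz = 0.1              # assumed resolvable frequency shift (hypothetical)
min_detectable_field_nt = freq_noise_hz / sensitivity_hz_per_nt
print(f"Minimum detectable field ≈ {min_detectable_field_nt * 1000:.0f} pT")  # ≈ 20 pT
```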